
    Determination of traceable genotoxic impurity chloroacetyl chloride, a carcinogen, by LC/MS/MS in drug substances

    Objective: A simple liquid chromatography-mass spectroscopy/mass spectroscopy (LC/MS/MS) method was developed for determining chloroacetyl chloride (CAC), a possible genotoxic impurity, at trace levels in pharmaceutical drug substances. Methods: The suitable LC/MS/MS-compatible buffers acetonitrile and ammonium formate were used to avoid matrix interferences and for better detectability. This method eliminates the solvent extraction and derivatization steps and requires only the direct injection of the sample into the LC/MS. Results: The method is specific and accurate, the critical validation parameters have been established, and the limit of quantitation is 0.003 µg/g. Conclusion: The method possesses the lowest detection level when compared with other methods referred to in this journal. Keywords: Liquid chromatography-mass spectroscopy, Chloroacetyl chloride, Active pharmaceutical ingredients, Carcinogenic, Drug substance
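
    As a brief illustration of the validation figure quoted above, the sketch below estimates a limit of quantitation from a linear calibration curve using the common ICH Q2 convention LOQ = 10·σ/S (residual standard deviation over calibration slope). The spiked levels and peak areas are hypothetical placeholders, not the paper's data.

```python
# Minimal sketch: estimating an LOQ from a linear calibration curve (ICH 10*sigma/slope).
# The concentration levels and peak areas below are illustrative, not from the paper.
import numpy as np

conc = np.array([0.005, 0.010, 0.020, 0.050, 0.100])   # spiked CAC levels, ug/g (hypothetical)
area = np.array([410, 820, 1650, 4100, 8200], float)   # LC/MS/MS peak areas (hypothetical)

slope, intercept = np.polyfit(conc, area, 1)            # linear calibration fit
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                           # residual standard deviation (n - 2 dof)
print(f"LOQ ~ {10 * sigma / slope:.4f} ug/g")
```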

    Using SAT solvers for synchronization issues in partial deterministic automata

    We approach the task of computing a carefully synchronizing word of minimum length for a given partial deterministic automaton, encoding the problem as an instance of SAT and invoking a SAT solver. Our experimental results demonstrate that this approach gives satisfactory results for automata with up to 100 states even if very modest computational resources are used. Comment: 15 pages, 3 figures
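
    The sketch below is a minimal, illustrative SAT encoding of the decision version of this problem (does a carefully synchronizing word of length L exist?), not necessarily the authors' exact encoding. It assumes the python-sat (pysat) package; the variable layout and the helper careful_sync_sat are hypothetical.

```python
# Decide whether a partial DFA has a carefully synchronizing word of length L via SAT.
# Illustrative encoding only; assumes the python-sat (pysat) package.
from itertools import combinations
from pysat.solvers import Glucose3

def careful_sync_sat(n_states, alphabet, delta, L):
    """delta: dict (state, letter) -> state; missing keys are undefined transitions."""
    var = {}
    def v(key):                                   # fresh SAT variable per symbolic key
        if key not in var:
            var[key] = len(var) + 1
        return var[key]

    clauses = []
    # x(t, q): state q is active after reading t letters; y(t, a): letter a is used at step t
    for q in range(n_states):
        clauses.append([v(('x', 0, q))])          # all states are active initially
    for t in range(L):
        letters = [v(('y', t, a)) for a in alphabet]
        clauses.append(letters)                   # at least one letter per step
        for l1, l2 in combinations(letters, 2):   # at most one letter per step
            clauses.append([-l1, -l2])
        for q in range(n_states):
            for a in alphabet:
                if (q, a) in delta:               # an active state moves along a defined edge
                    clauses.append([-v(('x', t, q)), -v(('y', t, a)),
                                    v(('x', t + 1, delta[(q, a)]))])
                else:                             # letter a is forbidden while q is active
                    clauses.append([-v(('x', t, q)), -v(('y', t, a))])
    finals = [v(('x', L, q)) for q in range(n_states)]
    for f1, f2 in combinations(finals, 2):        # at most one state remains active at the end
        clauses.append([-f1, -f2])

    with Glucose3(bootstrap_with=clauses) as solver:
        if not solver.solve():
            return None
        model = set(solver.get_model())
        return [a for t in range(L) for a in alphabet if v(('y', t, a)) in model]
```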

    High Dimensional Sparse Econometric Models: An Introduction

    In this chapter we provide a conceptual discussion of high-dimensional sparse econometric models as well as the estimation of these models using ℓ1-penalization and post-ℓ1-penalization methods. Focusing on linear and nonparametric regression frameworks, we discuss various econometric examples, present basic theoretical results, and illustrate the concepts and methods with Monte Carlo simulations and an empirical application. In the application, we examine and confirm the empirical validity of the Solow-Swan model for international economic growth
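
    A minimal sketch of the two estimation steps named above, ℓ1-penalized (Lasso) regression followed by post-ℓ1 refitting (ordinary least squares on the selected regressors), using scikit-learn; the simulated data and penalty level are illustrative only.

```python
# Lasso followed by post-Lasso (OLS on the selected support) on simulated sparse data.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                        # n observations, p regressors, s truly relevant
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 1.0           # sparse true coefficient vector
y = X @ beta + rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)           # L1-penalized fit
support = np.flatnonzero(lasso.coef_)        # selected regressors
post = LinearRegression().fit(X[:, support], y)   # post-Lasso: OLS on the selected set
print("selected:", support)
print("post-Lasso coefficients:", np.round(post.coef_, 2))
```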

    Robust statistical frontalization of human and animal faces

    The unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. In this paper, a novel method, robust to pose, illumination variations, and occlusions, is proposed for joint face frontalization and landmark localization. Unlike the state-of-the-art methods for landmark localization and pose correction, where large amounts of manually annotated images or 3D facial models are required, the proposed method relies on a small set of frontal images only. By observing that, for both humans and animals, the frontal facial image is the one having the minimum rank among all different poses, a model is devised which is able to jointly recover the frontalized version of the face as well as the facial landmarks. To this end, a suitable optimization problem is solved, concerning minimization of the nuclear norm (the convex surrogate of the rank function) and the matrix ℓ1 norm accounting for occlusions. The proposed method is assessed in frontal view reconstruction of human and animal faces, landmark localization, pose-invariant face recognition, face verification in unconstrained conditions, and video inpainting by conducting experiments on 9 databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods for the target problems
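
    The nuclear-norm-plus-ℓ1 objective mentioned above is the same pairing used in robust PCA. The sketch below shows a generic low-rank-plus-sparse decomposition via ADMM as an illustration of that pairing; it is not the paper's joint frontalization and landmark model, and the parameter choices are illustrative.

```python
# Generic robust-PCA-style decomposition: observation D = low-rank L + sparse S,
# solved with ADMM. Illustrative parameters; not the paper's algorithm.
import numpy as np

def shrink(M, tau):                           # soft-thresholding (l1 proximal operator)
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):                              # singular value thresholding (nuclear-norm prox)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, lam=None, mu=None, iters=200):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))     # common default weight for the sparse term
    mu = mu or 0.25 * m * n / np.abs(D).sum()
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)     # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)  # sparse (occlusion-like) update
        Y = Y + mu * (D - L - S)              # dual ascent on the equality constraint
    return L, S
```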

    On the interplay between Babai and Černý’s conjectures

    Motivated by the Babai conjecture and the Černý conjecture, we study the reset thresholds of automata with the transition monoid equal to the full monoid of transformations of the state set. For automata with n states in this class, we prove that the reset thresholds are upper bounded by 2n² - 6n + 5 and can attain the value (formula presented). In addition, we study diameters of the pair digraphs of permutation automata and construct n-state permutation automata with diameter (formula presented).
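
    For context, a reset threshold is the length of a shortest synchronizing word. The sketch below computes it for a small complete DFA by breadth-first search over subsets of states (exponential in n, so illustration only), using the classical Černý automaton C_4 as an example.

```python
# Shortest synchronizing word length (reset threshold) of a complete DFA via
# breadth-first search over subsets of states. Illustration for small automata only.
from collections import deque

def reset_threshold(n_states, delta):
    """delta[letter] is a tuple mapping each state to its successor."""
    start = frozenset(range(n_states))
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        subset, dist = queue.popleft()
        if len(subset) == 1:                  # all states merged: a reset word of this length exists
            return dist
        for f in delta.values():
            image = frozenset(f[q] for q in subset)
            if image not in seen:
                seen.add(image)
                queue.append((image, dist + 1))
    return None                               # the automaton is not synchronizing

# Cerny automaton C_4: 'a' is a cyclic shift, 'b' maps state 0 to 1 and fixes the rest.
cerny4 = {'a': (1, 2, 3, 0), 'b': (1, 1, 2, 3)}
print(reset_threshold(4, cerny4))             # expected: (4 - 1)^2 = 9
```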

    Limits of Practical Sublinear Secure Computation

    Secure computations on big data call for protocols that have sublinear communication complexity in the input length. While fully homomorphic encryption (FHE) provides a general solution to the problem, employing it on a large scale is currently quite far from being practical. This is also the case for secure computation tasks that reduce to weaker forms of FHE such as "somewhat homomorphic encryption" or single-server private information retrieval (PIR). Quite unexpectedly, Aggarwal, Mishra, and Pinkas (Eurocrypt 2004), Brickell and Shmatikov (Asiacrypt 2005), and shelat and Venkitasubramaniam (Asiacrypt 2015) have shown that in several natural instances of secure computation on big data, there are practical sublinear communication protocols that only require sublinear local computation and minimize the use of expensive public-key operations. This raises the question of whether similar protocols exist for other natural problems. In this paper we put forward a framework for separating "practical" sublinear protocols from "impractical" ones, and establish a methodology for identifying "provably hard" big-data problems that do not admit practical protocols. This is akin to the use of NP-completeness to separate hard algorithmic problems from easy ones. We show that while the previous protocols of Aggarwal et al., Brickell and Shmatikov, and shelat and Venkitasubramaniam are indeed classified as being "practical" in this framework, slight variations of the problems they solve and other natural computational problems on big data are hard. Our negative results are established by showing that the problem at hand is "PIR-hard" in the sense that any secure protocol for the problem implies PIR on a large database. This imposes a barrier on the local computational cost of secure protocols for the problem. We also identify a new natural relaxation of PIR that we call semi-PIR, which is useful for establishing "intermediate hardness" of several practically motivated secure computation tasks. We show that semi-PIR implies slightly sublinear PIR via an adaptive black-box reduction and that ruling out a stronger black-box reduction would imply a major breakthrough in complexity theory. We also establish information-theoretic separations between semi-PIR and PIR, showing that some problems that we prove to be semi-PIR-hard are not PIR-hard
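
    As background on the PIR functionality that these hardness results reduce to, the toy sketch below shows the classical two-server, information-theoretic XOR scheme: the client learns db[i] while each server individually sees a uniformly random query. It is not a protocol from the paper (which concerns single-server PIR and its relaxations), and its communication is linear rather than sublinear.

```python
# Toy two-server information-theoretic PIR: each server sees a uniformly random
# selection vector, yet the XOR of the two answers equals the requested bit.
import secrets

def pir_query(n, i):
    """Client: build the two queries needed to learn bit i of an n-bit database."""
    q1 = [secrets.randbelow(2) for _ in range(n)]   # uniformly random query for server 1
    q2 = q1.copy()
    q2[i] ^= 1                                      # differs only at position i (server 2)
    return q1, q2

def pir_answer(db, q):
    """Server: XOR of the database bits selected by the query; independent of i on its own."""
    return sum(b & s for b, s in zip(db, q)) % 2

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q1, q2 = pir_query(len(db), i)
print(pir_answer(db, q1) ^ pir_answer(db, q2) == db[i])   # True: the client recovers db[i]
```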

    Dynamic behavior analysis via structured rank minimization

    Human behavior and affect is inherently a dynamic phenomenon involving the temporal evolution of patterns manifested through a multiplicity of non-verbal behavioral cues, including facial expressions, body postures and gestures, and vocal outbursts. A natural assumption for human behavior modeling is that a continuous-time characterization of behavior is the output of a linear time-invariant system when behavioral cues act as the input (e.g., continuous rather than discrete annotations of dimensional affect). Here we study the learning of such a dynamical system under real-world conditions, namely in the presence of noisy behavioral cue descriptors and possibly unreliable annotations, by employing structured rank minimization. To this end, a novel structured rank minimization method and its scalable variant are proposed. The generalizability of the proposed framework is demonstrated by conducting experiments on 3 distinct dynamic behavior analysis tasks, namely (i) conflict intensity prediction, (ii) prediction of valence and arousal, and (iii) tracklet matching. The attained results outperform those achieved by other state-of-the-art methods for these tasks and, hence, evidence the robustness and effectiveness of the proposed approach
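
    One concrete way to see the "structured rank" at play: the outputs of a low-order linear time-invariant system fill a Hankel matrix of low rank, so learning the dynamics can be cast as rank minimization over Hankel-structured matrices. The sketch below only builds such a matrix from a noisy synthetic sequence and inspects its singular values; it is an illustration, not the proposed method.

```python
# Hankel matrix of a noisy LTI output: the number of dominant singular values
# reveals the (low) system order. Synthetic data for illustration only.
import numpy as np

def hankel(y, rows):
    cols = len(y) - rows + 1
    return np.array([y[i:i + cols] for i in range(rows)])

rng = np.random.default_rng(1)
t = np.arange(200)
clean = 0.8 * np.cos(0.3 * t) + 0.5 * np.sin(0.7 * t)   # free response of a 4th-order LTI system
noisy = clean + 0.05 * rng.standard_normal(t.size)

H = hankel(noisy, rows=20)
s = np.linalg.svd(H, compute_uv=False)
print(np.round(s[:8], 2))          # roughly 4 dominant singular values -> system order about 4
```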

    Epidermal Growth Factor Gene Polymorphism and Risk of Hepatocellular Carcinoma: A Meta-Analysis

    BACKGROUND: Hepatocarcinogenesis is a complex process that may be influenced by many factors, including polymorphism in the epidermal growth factor (EGF) gene. Previous work suggests an association between the EGF 61*A/G polymorphism (rs4444903) and susceptibility to hepatocellular carcinoma (HCC), but the results have been inconsistent. Therefore, we performed a meta-analysis of several studies covering a large population to address this controversy. METHODS: PubMed, EMBASE, Google Scholar and the Chinese National Knowledge Infrastructure databases were systematically searched to identify relevant studies. Data were abstracted independently by two reviewers. A meta-analysis was performed to examine the association between the EGF 61*A/G polymorphism and susceptibility to HCC. Odds ratios (ORs) and 95% confidence intervals (95% CIs) were calculated. RESULTS: Eight studies were included in this meta-analysis, involving 1,304 HCC cases (1,135 Chinese, 44 Caucasian and 125 mixed) and 2,613 controls (1,638 Chinese, 77 Caucasian and 898 mixed). The EGF 61*G allele was significantly associated with increased risk of HCC based on allelic contrast (OR = 1.29, 95% CI = 1.16-1.44, p<0.001), homozygote comparison (OR = 1.79, 95% CI = 1.39-2.29, p<0.001) and a recessive genetic model (OR = 1.34, 95% CI = 1.16-1.54, p<0.001), while patients carrying the EGF 61*A/A genotype had a significantly lower risk of HCC than those with the G/A or G/G genotype (A/A vs. G/A+G/G, OR = 0.66, 95% CI = 0.53-0.83, p<0.001). CONCLUSION: The 61*G polymorphism in EGF is a risk factor for hepatocarcinogenesis, while the EGF 61*A allele is a protective factor. Further large and well-designed studies are needed to confirm this conclusion
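
    For readers unfamiliar with how pooled estimates like those above are formed, the sketch below shows fixed-effect inverse-variance pooling of per-study log odds ratios from 2x2 tables. The study counts in the example are hypothetical placeholders, not the counts from the eight included studies.

```python
# Fixed-effect inverse-variance pooling of log odds ratios from 2x2 tables.
# The example counts are hypothetical, not data from the meta-analysis.
import math

def pooled_or(tables):
    """tables: list of (a, b, c, d) = exposed cases, unexposed cases, exposed controls, unexposed controls."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d        # variance of the per-study log odds ratio
        w = 1 / var                                # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    return math.exp(pooled), (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

example = [(120, 80, 150, 160), (95, 60, 130, 140)]   # hypothetical study counts
or_, (lo, hi) = pooled_or(example)
print(f"OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```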